7 Appendix
7.1 Polynomial regression examples
The algorithm is applied using fixed trust weights, with 1/3 in each entry. Since the entries of the trust matrices are all positive, being fully connected implies being strongly connected, and any product of these matrices (of whatever length) is SIA: such a product is itself positive, so its diagonal entries are all positive, and it is easy to verify that a positive matrix is always a scrambling matrix. As the matrix already has high diagonal values, the claim follows. Thus, Proposition 4 is proved.
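The key step, that a positive matrix is scrambling, can be stated explicitly: a stochastic matrix $A$ is scrambling when every pair of rows shares at least one column in which both have positive entries, a condition a strictly positive matrix satisfies trivially:

```latex
A \text{ is scrambling} \iff \forall\, i \neq j,\ \exists\, k:\ a_{ik} > 0 \ \text{and}\ a_{jk} > 0 .
```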
Hierarchical Federated Learning for Social Network with Mobility
Chen, Zeyu, Chen, Wen, Li, Jun, Wu, Qingqing, Ding, Ming, Han, Xuefeng, Deng, Xiumei, Wang, Liwei
Federated Learning (FL) offers a decentralized solution that allows collaborative local model training and global aggregation, thereby protecting data privacy. In conventional FL frameworks, data privacy is typically preserved under the assumption that local data remains absolutely private, while the mobility of clients is frequently neglected in explicit modeling. In this paper, we propose a hierarchical federated learning framework based on a social network with mobility, namely HFL-SNM, that considers both data sharing among clients and their mobility patterns. Under the constraints of limited resources, we formulate a joint optimization problem of resource allocation and client scheduling, whose objective is to minimize the energy consumption of clients during the FL process. In the social network, we introduce the concepts of Effective Data Coverage Rate and Redundant Data Coverage Rate, and we analyze the impact of effective and redundant data on model performance through preliminary experiments. We decouple the optimization problem into multiple sub-problems, analyze them based on the preliminary experimental results, and propose the Dynamic Optimization in Social Network with Mobility (DO-SNM) algorithm. Experimental results demonstrate that our algorithm achieves superior model performance while significantly reducing energy consumption, compared to traditional baseline algorithms.
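The abstract does not define the two coverage rates, but one plausible reading can be sketched. This is a hypothetical illustration, not the paper's definition: assume each scheduled client exposes the set of sample IDs it can contribute, count samples covered at least once (effective) and more than once (redundant), and normalize by the size of the global data pool.

```python
# Hypothetical sketch of Effective / Redundant Data Coverage Rates.
# Assumption (not from the paper): clients expose sets of sample IDs;
# "effective" counts samples covered at least once, "redundant" counts
# samples covered by more than one scheduled client.
from collections import Counter

def coverage_rates(scheduled_clients, data_pool):
    """scheduled_clients: list of sets of sample IDs; data_pool: set of all IDs."""
    counts = Counter()
    for client_data in scheduled_clients:
        counts.update(client_data & data_pool)   # ignore IDs outside the pool
    covered = sum(1 for c in counts.values() if c >= 1)
    redundant = sum(1 for c in counts.values() if c >= 2)
    n = len(data_pool)
    return covered / n, redundant / n
```

Under this reading, scheduling clients with disjoint data raises the effective rate without paying the redundant-data cost the paper's preliminary experiments examine.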
Widening the Network Mitigates the Impact of Data Heterogeneity on FedAvg
Federated learning (FL) enables decentralized clients to train a model collaboratively without sharing local data. A key distinction between FL and centralized learning is that clients' data are non-independent and identically distributed, which poses significant challenges in training a global model that generalizes well across heterogeneous local data distributions. In this paper, we analyze the convergence of overparameterized FedAvg with gradient descent (GD). We prove that the impact of data heterogeneity diminishes as the width of neural networks increases, ultimately vanishing when the width approaches infinity. In the infinite-width regime, we further prove that both the global and local models in FedAvg behave as linear models, and that FedAvg achieves the same generalization performance as centralized learning with the same number of GD iterations. Extensive experiments validate our theoretical findings across various network architectures, loss functions, and optimization methods.
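The FedAvg-with-GD procedure analyzed above can be sketched minimally. As an illustration of the linear-model regime the paper proves for infinite width, note that for an actual linear model with one full-batch local GD step per round and equal client weights, the averaged update coincides with centralized GD on the pooled data (this toy least-squares setup is ours, not the paper's code):

```python
# Minimal sketch: FedAvg with full-batch gradient descent on a linear model.
# With one local step per round and equally sized clients, averaging the
# locally updated models equals one centralized GD step on the pooled data.
import numpy as np

def local_gd_step(w, X, y, lr):
    grad = X.T @ (X @ w - y) / len(y)   # least-squares gradient on local data
    return w - lr * grad

def fedavg_round(w, clients, lr):
    # clients: list of (X_k, y_k) pairs with equal sample counts
    return np.mean([local_gd_step(w, X, y, lr) for X, y in clients], axis=0)
```

Data heterogeneity across the `(X_k, y_k)` splits leaves this round unchanged, mirroring the vanishing-heterogeneity effect in the infinite-width limit.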
Few-Round Learning for Federated Learning (Supplementary Material) Y ounghyun Park
This latter observation is expected given the different design objectives. Recall that this choice was made as computing the double derivative terms would have required extra communication bandwidth as well increased computational load. The number of participating clients is set to 10. Comparison with personalized FL: Performance with both unseen/seen classes at deployment. Specifically, we decrease the number of data in each episode from 6000 to 1200 in CIFAR-100, so that each user holds only 120 images.
FedABC: Attention-Based Client Selection for Federated Learning with Long-Term View
Ye, Wenxuan, An, Xueli, Wang, Junfan, Yan, Xueqiang, Carle, Georg
Native AI support is a key objective in the evolution of 6G networks, with Federated Learning (FL) emerging as a promising paradigm. FL allows decentralized clients to collaboratively train an AI model without directly sharing their data, preserving privacy. Clients train local models on private data and share model updates, which a central server aggregates to refine the global model and redistribute it for the next iteration. However, client data heterogeneity slows convergence and reduces model accuracy, and frequent client participation imposes communication and computational burdens. To address these challenges, we propose FedABC, an innovative client selection algorithm designed to take a long-term view in managing data heterogeneity and optimizing client participation. Inspired by attention mechanisms, FedABC prioritizes informative clients by evaluating both model similarity and each model's unique contributions to the global model. Moreover, considering the evolving demands of the global model, we formulate an optimization problem to guide FedABC throughout the training process. Following the "later-is-better" principle, FedABC adaptively adjusts the client selection threshold, encouraging greater participation in later training stages. Extensive simulations on CIFAR-10 demonstrate that FedABC significantly outperforms existing approaches in model accuracy and client participation efficiency, achieving comparable performance with 32% fewer clients than the classical FL algorithm FedAvg, and 3.5% higher accuracy with 2% fewer clients than the state-of-the-art. This work marks a step toward deploying FL in heterogeneous, resource-constrained environments, thereby supporting native AI capabilities in 6G networks.
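The attention-inspired selection described above can be loosely sketched. The scoring rule, softmax weighting, and threshold schedule below are our assumptions for illustration, not FedABC's exact formulation: score each client's update by its similarity to the aggregate, normalize the scores attention-style, and compare against a threshold that relaxes in later rounds ("later-is-better").

```python
# Loose, hypothetical sketch of attention-style client selection in the
# spirit of FedABC (scoring rule and schedule are our assumptions).
import numpy as np

def select_clients(updates, round_t, total_rounds, base_threshold=0.1):
    U = np.stack(updates)                      # one flattened update per client
    g = U.mean(axis=0)                         # current aggregate direction
    sims = U @ g / (np.linalg.norm(U, axis=1) * np.linalg.norm(g) + 1e-12)
    weights = np.exp(sims) / np.exp(sims).sum()  # attention-style normalization
    # threshold shrinks over training, admitting more clients in later rounds
    threshold = base_threshold * (1 - round_t / total_rounds)
    return [k for k, w in enumerate(weights) if w >= threshold]
```

Early rounds thus keep only the clients whose updates align best with the global direction, while late rounds broaden participation.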
Pigeon-SL: Robust Split Learning Framework for Edge Intelligence under Malicious Clients
Park, Sangjun, Quek, Tony Q. S., Seo, Hyowoon
This work has been submitted to the IEEE for possible publication. Abstract -- Recent advances in split learning (SL) have established it as a promising framework for privacy-preserving, communication-efficient distributed learning at the network edge. However, SL's sequential update process is vulnerable to even a single malicious client, which can significantly degrade model accuracy. To address this, we introduce Pigeon-SL, a novel scheme grounded in the pigeonhole principle that guarantees at least one entirely honest cluster among M clients, even when up to N of them are adversarial. In each global round, the access point partitions the clients into N + 1 clusters, trains each cluster independently via vanilla SL, and evaluates their validation losses on a shared dataset. We further enhance training and communication efficiency with Pigeon-SL+, which repeats training on the selected cluster to match the update throughput of standard SL.
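The pigeonhole argument in the round structure above is mechanical enough to sketch: with at most N adversaries, any partition into N + 1 clusters must leave one cluster entirely honest, so selecting the cluster with the lowest shared-set validation loss favors an uncorrupted model. The partitioning scheme and the `train_cluster` / `val_loss` callables below are simplifying placeholders, not the paper's exact procedure.

```python
# Sketch of pigeonhole-based cluster selection in the spirit of Pigeon-SL.
# train_cluster / val_loss stand in for vanilla split-learning training and
# shared-dataset evaluation at the access point.
def pigeon_sl_round(clients, n_adversaries, train_cluster, val_loss):
    k = n_adversaries + 1                      # pigeonhole: >=1 all-honest cluster
    clusters = [clients[i::k] for i in range(k)]
    models = [train_cluster(c) for c in clusters]
    losses = [val_loss(m) for m in models]
    best = min(range(k), key=lambda i: losses[i])
    return clusters[best], models[best]
```

Pigeon-SL+ would then repeat training on `clusters[best]` to recover the update throughput of standard SL.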
Multimodal Online Federated Learning with Modality Missing in Internet of Things
Wang, Heqiang, Liu, Xiang, Zhong, Xiaoxiong, Chen, Lixing, Liu, Fangming, Zhang, Weizhe
The Internet of Things (IoT) ecosystem generates vast amounts of multimodal data from heterogeneous sources such as sensors, cameras, and microphones. As edge intelligence continues to evolve, IoT devices have progressed from simple data collection units to nodes capable of executing complex computational tasks. This evolution necessitates the adoption of distributed learning strategies to effectively handle multimodal data in an IoT environment. To address these challenges, we introduce the concept of Multimodal Online Federated Learning (MMO-FL), a novel framework designed for dynamic and decentralized multimodal learning in IoT environments. Building on this framework, we further account for the inherent instability of edge devices, which frequently results in missing modalities during the learning process. We conduct a comprehensive theoretical analysis under both complete and missing modality scenarios, providing insights into the performance degradation caused by missing modalities. To mitigate the impact of missing modalities, we propose the Prototypical Modality Mitigation (PMM) algorithm, which leverages prototype learning to effectively compensate for missing modalities. Experimental results on two multimodal datasets further demonstrate the superior performance of PMM compared to benchmarks. The rapid expansion of the Internet of Things (IoT) [1] has led to an unprecedented surge in data generated by a multitude of interconnected devices, including smart home appliances [2], wearable health monitors [3], and industrial sensors [4]. To enable intelligent services and applications across the IoT ecosystem, artificial intelligence techniques, particularly machine learning and deep learning, have become fundamental tools for model training on large-scale IoT data. Traditionally, such training has been performed in centralized cloud platforms or data centers.
However, this centralized paradigm faces significant challenges as both the scale of IoT data and the number of IoT devices continue to expand. Transferring large volumes of raw data to centralized servers imposes significant demands on network bandwidth and leads to substantial communication overhead, rendering it impractical for latency-sensitive applications such as autonomous driving [5] and real-time healthcare monitoring [6]. Additionally, uploading sensitive user data to the cloud raises serious privacy concerns [7]. In this context, federated learning (FL) [8] has emerged as a promising distributed learning paradigm. FL enables collaborative model training across devices while keeping raw data local, offering a cost-effective and privacy-preserving alternative to traditional centralized learning. (L. Chen is with Shanghai Jiao Tong University, Shanghai, 200240, China.)
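The prototype-learning idea behind PMM described above admits a simple sketch. The running-mean prototype bank and the imputation rule here are our assumptions for illustration, not the paper's exact method: maintain a per-(modality, class) mean embedding, and substitute that prototype whenever a device fails to deliver a modality.

```python
# Hypothetical sketch of prototype-based compensation for a missing modality,
# in the spirit of PMM: class prototypes are running means of observed
# embeddings, and a missing modality is imputed with its class prototype.
import numpy as np

class PrototypeBank:
    def __init__(self):
        self.sums, self.counts = {}, {}

    def update(self, modality, label, embedding):
        key = (modality, label)
        self.sums[key] = self.sums.get(key, 0.0) + np.asarray(embedding, float)
        self.counts[key] = self.counts.get(key, 0) + 1

    def impute(self, modality, label):
        key = (modality, label)
        return self.sums[key] / self.counts[key]   # prototype = mean embedding
```

In an online FL round, a device with a failed sensor would call `impute` for the absent modality so the multimodal model still receives a complete input.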